Use of Retrieval-Augmented Large Language Model Agent for Long-Form COVID-19 Fact-Checking
Huang, Jingyi, Yang, Yuyi, Ji, Mengmeng, Alba, Charles, Zhang, Sheng, An, Ruopeng
The COVID-19 infodemic calls for scalable fact-checking solutions that handle long-form misinformation with accuracy and reliability. This study presents SAFE (System for Accurate Fact Extraction and Evaluation), an agent system that combines large language models with retrieval-augmented generation (RAG) to improve automated fact-checking of long-form COVID-19 misinformation. SAFE includes two agents: one for claim extraction and another for claim verification using LOTR-RAG, which leverages a 130,000-document COVID-19 research corpus. An enhanced variant, SAFE (LOTR-RAG + SRAG), incorporates Self-RAG to refine retrieval via query rewriting. We evaluated both systems on 50 fake news articles (2-17 pages) containing 246 annotated claims (M = 4.922, SD = 3.186), labeled as true (14.1%), partly true (14.4%), false (27.0%), partly false (2.2%), and misleading (21.0%) by public health professionals. Both SAFE systems significantly outperformed baseline LLMs on all metrics (p < 0.001). For consistency (0-1 scale), SAFE (LOTR-RAG) scored 0.629, exceeding both SAFE (LOTR-RAG + SRAG) (0.577) and the baseline (0.279). In subjective evaluations (0-4 Likert scale), SAFE (LOTR-RAG) also achieved the highest average ratings in usefulness (3.640), clearness (3.800), and authenticity (3.526). Adding SRAG slightly reduced overall performance, except for a minor gain in clearness. SAFE demonstrates robust improvements in long-form COVID-19 fact-checking by addressing LLM limitations in consistency and explainability. The core LOTR-RAG design proved more effective than its SRAG-augmented variant, offering a strong foundation for scalable misinformation mitigation.
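The abstract's pipeline (extract claims from a long-form article, retrieve evidence from a corpus, then label each claim) can be sketched minimally. Everything below is an illustrative assumption: the toy corpus, the overlap-based retriever, and the labeling rule stand in for the paper's LLM-backed agents and its LOTR-RAG retriever over the real 130,000-document corpus.

```python
# Hedged sketch of a two-agent fact-checking pipeline in the style the
# abstract describes. Function names, the corpus, and the scoring rule
# are illustrative stand-ins, not the authors' implementation.
from dataclasses import dataclass

# Stand-in for the COVID-19 research corpus used for retrieval.
CORPUS = [
    "Vaccines reduce the risk of severe COVID-19 illness.",
    "Masks lower transmission of respiratory viruses indoors.",
    "Vitamin C has not been shown to cure COVID-19.",
]

@dataclass
class Verdict:
    claim: str
    evidence: str
    label: str  # "supported" or "unverified" in this toy version

def extract_claims(article: str) -> list[str]:
    """Agent 1 (stub): split a long-form article into candidate claims.
    The real system uses an LLM to extract check-worthy claims."""
    return [s.strip() for s in article.split(".") if s.strip()]

def retrieve(claim: str, corpus: list[str], k: int = 1) -> list[str]:
    """RAG step (stub): rank documents by word overlap with the claim,
    standing in for the LOTR-RAG retriever."""
    words = set(claim.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def verify(claim: str, corpus: list[str]) -> Verdict:
    """Agent 2 (stub): label a claim against its retrieved evidence."""
    evidence = retrieve(claim, corpus)[0]
    overlap = set(claim.lower().split()) & set(evidence.lower().split())
    label = "supported" if len(overlap) >= 3 else "unverified"
    return Verdict(claim, evidence, label)

if __name__ == "__main__":
    article = ("Vaccines reduce the risk of severe COVID-19 illness. "
               "The moon landing was staged.")
    for claim in extract_claims(article):
        print(verify(claim, CORPUS).label)
```

The Self-RAG variant would add a query-rewriting step before `retrieve`; in this sketch that would amount to reformulating `claim` into a better search query before ranking.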
- North America > United States (0.47)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > Canada (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Epidemiology (1.00)
Researchers reveal how they would deal with an AI uprising
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might fear a cybernetic takeover that forces us to live locked away, 'Matrix'-like, as some sort of human battery. And yet it is hard for me to look up from the evolutionary computer models I use to develop AI and imagine how the innocent virtual creatures on my screen might become the monsters of the future. One leading expert says he would 'appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive'. Why should a superintelligence keep us around? I would argue that I am a good person who might have even helped to bring about the superintelligence itself.
- North America > United States > Michigan (0.05)
- Europe > Ukraine > Kyiv Oblast > Chernobyl (0.05)
- Asia (0.05)
- Leisure & Entertainment (0.70)
- Health & Medicine (0.49)
- Energy > Power Industry (0.48)
- Media > Film (0.35)